During our course we will discuss methods of control over dynamical systems. Our task can be summarized as follows: to design control algorithms that cause the controlled object to perform the desired behavior, even in the presence of possible disturbances.
But before we dig into the details, let us recall what the main components of a control system are:
From the scheme above we may distinguish the following:
Operator/User - provides the desired behavior to the control system by means of commands
Plant - the device/software we are trying to control; it consists of:
Controller - the software and hardware that take outputs from the plant/simulator and produce appropriate inputs based on user commands; the controller itself may consist of different blocks, i.e. planner, regulator, observer, estimator, etc.
Environment - the system and the world are in a constant bilateral interaction that may either be beneficial or reduce the system performance by inducing disturbances
In this course we will mainly focus on the design of advanced controllers; however, to facilitate this we must ensure that we have a clear picture of the object we are going to control.
So today we will try to:
While doing so we will make two assumptions (at least for today): the plant actuators are ideal and supply the control inputs to the plant without any distortion, and the full state of the plant is measurable without any noise (the sensors are ideal as well).
A dynamical system is a system whose behaviour, indicated by its output
signal, evolves over time, possibly under the influence of external inputs.
Examples of dynamical systems include:
Any object with quantities that change over time can be viewed as a dynamical system.
A dynamical system may be
A mathematical model is an abstraction of the real world. While building a model of a process, many simplifications may be made, and even seemingly accurate models never perfectly describe the underlying process. A model should be as simple as possible, and no simpler.
Anything in the physical or biological world, whether natural or involving technology, is subject to analysis by mathematical models if it can be described in terms of mathematical expressions.
For the purpose of control design, a suitable model of the system should be used.
Different kinds of models used in control design are:
At present, mostly state-space models are used due to their generality, simplicity of implementation, and the mature mathematical apparatus that simplifies their analysis. However, impulse response and transfer function models may be useful as well: for instance, if one is interested in input-output relationships, or some terminal properties of a system, impulse response models or transfer function descriptions can be used.
However in this course we will stick to the state space models, since they are widely used in modern control systems.
These models are based on the concept of the state of the system.
The state of a system is the smallest set of variables (called state variables) such that the knowledge of these variables at some time $t_0$, together with the knowledge of the input for all $t \geq t_0$, completely determines the behavior of the system for any time $t \geq t_0$.
A state-space model of a system is expressed with a set of first-order
differential equations, one for each state variable.
Linear control theory has been predominantly concerned with the study of linear time-invariant (LTI) control systems, of the form:

$$\dot{\mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}$$

with $\mathbf{x} \in \mathbb{R}^n$ being the vector of states, $\mathbf{A} \in \mathbb{R}^{n \times n}$ the system matrix, $\mathbf{u} \in \mathbb{R}^m$ the input (control) vector, and $\mathbf{B} \in \mathbb{R}^{n \times m}$ the input matrix.
LTI systems have quite simple properties, such as:
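For instance, one hallmark LTI property is superposition: the response to a sum of states and inputs equals the sum of the individual responses. A minimal numerical sketch (the system matrices below are chosen arbitrarily for illustration):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])
B = np.array([[0.0],
              [1.0]])

def step(x, u, T=0.01):
    # one forward-Euler step of the LTI dynamics dx/dt = A x + B u
    return x + T * (A @ x + B.flatten() * u)

x1 = step(np.array([1.0, 0.0]), 1.0)
x2 = step(np.array([0.0, 1.0]), 2.0)
x12 = step(np.array([1.0, 1.0]), 3.0)
# superposition: the response to the sum equals the sum of responses
```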
Example: Mass-Spring-Damper

$$m\ddot{y} + b\dot{y} + ky = u$$

And one can formulate this system in state space, with $\mathbf{x} = [y, \dot{y}]^T$, as:

$$\dot{\mathbf{x}} = \begin{bmatrix} 0 & 1 \\ -k/m & -b/m \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ 1/m \end{bmatrix}u$$
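As a quick sketch, the mass-spring-damper state-space model can be written down in NumPy (the parameter values m, b, k below are chosen arbitrarily for illustration):

```python
import numpy as np

# hypothetical parameters: mass m, damping b, stiffness k
m, b, k = 1.0, 0.5, 2.0

# state x = [position, velocity]; dynamics dx/dt = A x + B u
A = np.array([[0.0, 1.0],
              [-k/m, -b/m]])
B = np.array([[0.0],
              [1.0/m]])

def msd_dynamics(x, u):
    """State derivative of the mass-spring-damper."""
    return A @ x + B.flatten() * u
```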
Physical systems are inherently nonlinear. Thus, all control systems are nonlinear to a
certain extent.
Nonlinear control systems can be described by nonlinear differential equations:

$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{u})$$

where $\mathbf{f}$ is some nonlinear smooth function.
Nonlinear systems, in contrast to linear ones:
The form above is fairly general; however, there are known special cases, such as control-affine and driftless systems, which we will study a bit later.
Example: Nonlinear Pendulum
Given the state $\mathbf{x} = [\theta, \dot{\theta}]^T$, we may formulate the equation above as:
Example: Variable Mass Lander
Noting that the mass varies in time, we need to include it in the state; the equation above is then equivalent to:
The equations of motion for most mechanical systems may be written in the following form:

$$\mathbf{M}(\mathbf{q})\ddot{\mathbf{q}} + \mathbf{c}(\mathbf{q}, \dot{\mathbf{q}}) = \mathbf{u}$$

where:

- $\mathbf{q}$ is the vector of generalized coordinates
- $\mathbf{M}(\mathbf{q})$ is the inertia matrix
- $\mathbf{c}(\mathbf{q}, \dot{\mathbf{q}})$ collects the position- and velocity-dependent forces (Coriolis, gravity, friction)
- $\mathbf{u}$ is the vector of generalized forces (controls)

One can easily transform the mechanical system to state-space form by defining the state $\mathbf{x} = [\mathbf{q}, \dot{\mathbf{q}}]^T$:
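A minimal sketch of this transformation, assuming the bias forces (Coriolis, gravity, friction) are lumped into a single term `c(q, dq)`:

```python
import numpy as np

def mechanical_to_state_space(M, c):
    """Given an inertia-matrix function M(q) and a bias-force function
    c(q, dq), return first-order dynamics f(x, u) with state x = [q, dq]."""
    def f(x, u):
        n = len(x) // 2
        q, dq = x[:n], x[n:]
        # solve M(q) ddq = u - c(q, dq) for the accelerations
        ddq = np.linalg.solve(M(q), u - c(q, dq))
        return np.concatenate([dq, ddq])
    return f

# usage: the unit-parameter pendulum, M = 1, c = sin(q)
f_pend = mechanical_to_state_space(lambda q: np.eye(1),
                                   lambda q, dq: np.sin(q))
```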
Example: Floating Rigid Body
The model of a floating rigid body is described by its position, linear and angular velocities, subject to an external force and torque:
To rewrite the above in the general form, one may define the following:
Example: Artificial Satellite
An artificial satellite orbiting a planet may be described using Newton's theory of gravitation as:
Given the generalized coordinates and the control, one can express the above in the general form:
Example: Cart Pole
Let us consider the cart pole described by:
Defining the generalized coordinates and matching the terms yields:
Note that the choice of a set of state variables for a system is not unique, so a state-space model of a system is also not unique; there can be infinitely many possibilities.
When the model is obtained from a known differential equation, you can
always select meaningful state variables. However, state variables may not
directly relate to physical variables if the model is obtained from
identification methods.
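As an illustration of this non-uniqueness, any invertible change of coordinates $\mathbf{z} = \mathbf{T}\mathbf{x}$ yields another valid state-space model $(\mathbf{T}\mathbf{A}\mathbf{T}^{-1}, \mathbf{T}\mathbf{B})$ with the same intrinsic behavior (the matrices below are chosen arbitrarily):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

# an arbitrary invertible coordinate change z = T x
T = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Ti = np.linalg.inv(T)
Az = T @ A @ Ti
Bz = T @ B
# Az, Bz describe the same system in new coordinates:
# e.g. the eigenvalues (system poles) are unchanged
```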
It should be noted that there are special cases when the equations above may be simplified or some interesting properties may be exploited; examples are:
We will consider some of these later on.
Some dynamical systems are described in discrete time (which can be counted) rather than in continuous time, e.g., a system representing a bank account whose balance is reported once every day.
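The bank-account example can be sketched as a simple difference equation (the daily interest rate r below is a hypothetical value):

```python
# difference equation: x_{k+1} = (1 + r) * x_k
r = 0.0001   # hypothetical daily interest rate
x = 100.0    # initial balance
balances = [x]
for k in range(30):
    x = (1 + r) * x
    balances.append(x)
```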
Discrete systems are described by difference equations:

$$\mathbf{x}_{k+1} = \mathbf{f}_d(\mathbf{x}_k, \mathbf{u}_k)$$
Some systems are inherently discrete, whereas many continuous-time
systems are modeled in discrete-time for easier analysis and design with
digital computers.
In the field of robotics most models are actually derived from continuous differential equations, while the controller is implemented in digital form (software). Thus the overall control system may be described as follows:
So for the purpose of control and analysis, it is always useful to obtain a model of the system in discrete time. Obtaining a discrete-time model is referred to as discretization.
For an LTI system, we can obtain an exact discrete-time model if the input remains constant over each sampling period.
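As a sketch of such exact (zero-order-hold) discretization: the discrete matrices can be read off from the matrix exponential of an augmented matrix. The double-integrator plant below is a hypothetical example chosen for illustration:

```python
import numpy as np
from scipy.linalg import expm

# hypothetical LTI plant (double integrator) and sampling period
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
T = 0.1

# zero-order hold: expm of the augmented matrix [[A, B], [0, 0]]
# contains both A_d (top-left block) and B_d (top-right block)
n, m = B.shape
M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = B
Md = expm(M * T)
A_d = Md[:n, :n]
B_d = Md[:n, n:]
```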
Exact discretization of time-varying or nonlinear systems is difficult and may not be analytically possible.
Approximate discrete-time models are widely used in practice. For a small sampling time $T$, we can write, for the nonlinear system $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{u})$:

$$\mathbf{x}_{k+1} = \mathbf{x}_k + T\,\mathbf{f}(\mathbf{x}_k, \mathbf{u}_k)$$

And for a linear system, this becomes:

$$\mathbf{x}_{k+1} = (\mathbf{I} + T\mathbf{A})\mathbf{x}_k + T\mathbf{B}\mathbf{u}_k$$
This is called discretization by Euler method.
Example:
Consider the nonlinear pendulum described by the following state-space representation (assuming all parameters are equal to $1$):

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = u - \sin x_1$$

The discrete model is then given by the following set of difference equations:

$$x_{1,k+1} = x_{1,k} + T x_{2,k}, \qquad x_{2,k+1} = x_{2,k} + T(u_k - \sin x_{1,k})$$
While studying the ODE $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{u})$, one is often interested in its solution (integral curve):

$$\mathbf{x}(t) = \mathbf{x}(t_0) + \int_{t_0}^{t} \mathbf{f}(\mathbf{x}(\tau), \mathbf{u}(\tau))\, d\tau$$

Simulation is nothing but taking the integral above. However, in most practical situations it cannot be solved analytically, and one should consider numerical integration instead, thus ending up with a discrete system:

$$\mathbf{x}_{k+1} = \mathbf{f}_d(\mathbf{x}_k, \mathbf{u}_k)$$

Thus simulation is just iteration over the discrete dynamics starting from an initial point $\mathbf{x}_0$.
Let us implement the simulation of nonlinear pendulum via iterating the discrete dynamics:
```python
import numpy as np

def f(state, t, control):
    u = control
    x1, x2 = state
    dx1 = x2
    dx2 = u - np.sin(x1) - 0*x2  # damping term set to zero
    return np.array([dx1, dx2])
```
```python
x_0 = np.array([1, 0])
T = 2E-2          # sampling period
tf = 10           # final time
N = int(tf/T)
X = []
# ITERATE DISCRETE DYNAMICS
x_prev = x_0
for k in range(N):
    X.append(x_prev)
    u_k = 0
    x_new = x_prev + T*f(x_prev, k*T, u_k)
    x_prev = x_new
x_sol_simp = np.array(X)
```
Let us plot the result:
```python
from matplotlib.pyplot import *

plot(x_sol_simp, linewidth=2.0)
grid(color='black', linestyle='--', linewidth=1.0, alpha=0.7)
grid(True)
# xlim([t0, tf])
ylabel(r'State $x_k$')
xlabel(r'Samples $k$')
show()
```
The Euler method implemented above is highly dependent on the sampling period $T$. There are other suitable methods, the most widely used being the fourth-order Runge-Kutta method and advanced variational integrators. However, we will not dig into the integration algorithms; instead, for the purpose of simulation, we will use odeint from scipy.integrate:
```python
from scipy.integrate import odeint  # import integrator routine

scale = 5
X = []
x_prev = x_0
for k in range(N):
    X.append(x_prev)
    t_k = np.linspace(k*T, (k+1)*T, scale)
    u_k = 0
    x_new = odeint(f, x_prev, t_k, args=(u_k,))
    x_prev = x_new[-1, :]
x_sol = np.array(X)
```
```python
plot(x_sol, linewidth=2.0)
plot(x_sol_simp, linewidth=2.0)
grid(color='black', linestyle='--', linewidth=1.0, alpha=0.7)
grid(True)
ylabel(r'State $x_k$')
xlabel(r'Samples $k$')
show()
```
How can we make a given dynamical system display the desired behavior? This is one of the central questions in the field of control theory. One of the most widely used approaches to solving such problems is so-called feedback control.
Let us now assume that one has designed a feedback law as follows:

$$\mathbf{u} = \mathbf{k}(\mathbf{x})$$

One may substitute this control law and obtain the equations of the closed-loop system:

$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{k}(\mathbf{x}))$$

Now this system is unforced and may be analyzed as if there were no control at all: we have changed the overall nature of the plant, i.e., its governing dynamics.
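To make this concrete, here is a minimal sketch that closes the loop on the unit-parameter pendulum from earlier with a hypothetical damping-injection feedback (the gain is chosen purely for illustration), and then iterates the Euler-discretized closed-loop dynamics:

```python
import numpy as np

def f(x, u):
    # nonlinear pendulum with unit parameters, as in the example above
    return np.array([x[1], u - np.sin(x[0])])

def k(x):
    # hypothetical feedback law: damping injection, gain chosen for illustration
    return -2.0 * x[1]

# simulate the closed-loop (unforced) system with the Euler method
T, N = 1e-2, 2000
x = np.array([1.0, 0.0])
for _ in range(N):
    x = x + T * f(x, k(x))
# the added damping drives the state toward the equilibrium at the origin
```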
In the next lecture we will learn how to design the controller functions and analyze the closed-loop response with different numerical and analytical tools; for today we will move on to the practice session, where we will focus on the implementation side of the control system.